Algorithm

DINO

DINO Framework

Abstract: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) [18] that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder [31], multi-crop training [10], and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.

Own Summary: In this paper the authors show the effectiveness of combining the DINO framework with ViT-based architectures such as ViT and DeiT. There is no contrastive training and no negative pairs; instead, ideas such as the momentum encoder and multi-crop augmentation are adapted from BYOL and SwAV respectively. Training uses distillation with a teacher-student setup, and representation collapse is avoided by centering and sharpening the target distributions generated by the teacher. As in SwAV, the 2 large views (each covering roughly 50% or more of the image) are used as targets, while all views (2 large, 4 small) are used for predictions. Centering values and teacher parameters are updated via an exponential moving average (EMA).
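
To make the centering-and-sharpening mechanism concrete, here is a minimal sketch of the objective. Function names and the view-pairing loop are illustrative, not the library's exact implementation:

import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tps=0.1, tpt=0.04):
    # student_out: list of logits, one tensor per view (2 large + 4 small)
    # teacher_out: list of logits for the 2 large views only
    t_probs = [F.softmax((t - center) / tpt, dim=-1) for t in teacher_out]  # center, then sharpen
    s_logps = [F.log_softmax(s / tps, dim=-1) for s in student_out]
    loss, n_terms = 0., 0
    for ti, t in enumerate(t_probs):
        for si, s in enumerate(s_logps):
            if si == ti: continue                 # never compare a view with itself
            loss += -(t * s).sum(dim=-1).mean()   # cross-entropy between distributions
            n_terms += 1
    return loss / n_terms

@torch.no_grad()
def update_center(center, teacher_out, cmom=0.9):
    # EMA of the batch mean of teacher logits, used for centering above
    return cmom * center + (1 - cmom) * torch.cat(teacher_out).mean(dim=0)

@torch.no_grad()
def ema_update(teacher, student, mom=0.996):
    # Teacher parameters follow the student via EMA; mom is scheduled
    # from tmom_start to tmom_end during training.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(mom).add_(ps, alpha=1 - mom)

# Toy usage with random logits (K=65536 matches the head's output dim below)
K, bs = 65536, 4
center = torch.zeros(K)
s_out = [torch.randn(bs, K) for _ in range(6)]   # 6 student views
t_out = [torch.randn(bs, K) for _ in range(2)]   # 2 teacher views
loss = dino_loss(s_out, t_out, center)
center = update_center(center, t_out)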

class DINOHead[source]

DINOHead(in_dim, out_dim, use_bn=False, norm_last_layer=True, nlayers=3, hidden_dim=2048, bottleneck_dim=256) :: Module

Note: deep-copying the student to build the teacher fails because DINOHead's last layer is weight-normalized; copy.deepcopy raises RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment. The teacher is therefore constructed separately below.

https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html
https://pytorch.org/docs/stable/generated/torch.nn.GELU.html
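
The deepcopy limitation can be reproduced in isolation: weight_norm replaces a layer's weight with a computed (non-leaf) tensor, which the deepcopy protocol currently rejects. A minimal demonstration:

import copy
import torch
from torch import nn

# weight_norm leaves `weight` as a non-leaf tensor derived from weight_g/weight_v,
# which copy.deepcopy cannot handle.
layer = nn.utils.weight_norm(nn.Linear(256, 65536, bias=False))
try:
    copy.deepcopy(layer)
except RuntimeError as e:
    print(e)  # "Only Tensors created explicitly by the user ... deepcopy protocol ..."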

get_dino_aug_pipelines[source]

get_dino_aug_pipelines(num_crops=(2, 4), crop_sizes=(224, 96), min_scales=(0.4, 0.05), max_scales=(1.0, 0.4), rotate=True, jitter=True, bw=True, blur=True, resize_ratio=(0.75, 1.3333333333333333), rotate_deg=30, jitter_s=0.6, blur_s=(4, 32), same_on_batch=False, flip_p=0.5, rotate_p=0.3, jitter_p=0.3, bw_p=0.3, blur_p=0.3, stats=([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), cuda=True, xtra_tfms=[])

aug_pipelines = get_dino_aug_pipelines()
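
With the defaults num_crops=(2, 4), one would expect six pipelines back: two for the 224px global crops followed by four for the 96px local crops. A quick sanity check under that one-pipeline-per-crop assumption:

# Assumption: get_dino_aug_pipelines returns one pipeline per crop,
# large-crop pipelines first.
assert len(aug_pipelines) == 2 + 4  # 2 large + 4 small views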

class DINOModel[source]

DINOModel(student, teacher) :: Module

Same as nn.Module, but no need for subclasses to call super().__init__

import torch
from torch import nn

bs = 4
# 2 large (global) views and 4 small (local) views
x_large = [torch.randn(bs, 3, 224, 224)]*2
x_small = [torch.randn(bs, 3, 96, 96)]*4

# Student: DeiT-S/16 backbone wrapped for multi-crop input, plus projection head.
# deit_small, MultiCropWrapper and DINOHead come from this module.
deits16 = deit_small(patch_size=16, drop_path_rate=0.1)
deits16 = MultiCropWrapper(deits16)
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
student_model = nn.Sequential(deits16, dino_head)

# Teacher: same architecture, built separately since deepcopy fails (see note above).
deits16 = deit_small(patch_size=16)
deits16 = MultiCropWrapper(deits16)
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
teacher_model = nn.Sequential(deits16, dino_head)

dino_model = DINOModel(student_model, teacher_model)
dino_model.student[1]
DINOHead(
  (mlp): Sequential(
    (0): Linear(in_features=384, out_features=2048, bias=True)
    (1): GELU()
    (2): Linear(in_features=2048, out_features=2048, bias=True)
    (3): GELU()
    (4): Linear(in_features=2048, out_features=256, bias=True)
  )
  (last_layer): Linear(in_features=256, out_features=65536, bias=False)
)
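
Assuming MultiCropWrapper matches the official DINO implementation (one forward pass per input resolution, with outputs concatenated along the batch dimension), the views created above can be pushed through both networks; the shapes below are what that assumption predicts:

# Sketch: the student sees all 6 views, the teacher only the 2 large views.
with torch.no_grad():
    student_out = dino_model.student(x_large + x_small)
    teacher_out = dino_model.teacher(x_large)
student_out.shape, teacher_out.shape
# expected: (torch.Size([24, 65536]), torch.Size([8, 65536]))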

class DINO[source]

DINO(aug_pipelines, large_crop_ids=[0, 1], cmom=0.9, tmom_start=0.996, tmom_end=1.0, tmom_sched=SchedCos, tpt_start=0.04, tpt_end=0.04, tpt_warmup_pct=0.0, tpt_sched=SchedLin, tps=0.1, freeze_last_layer=1, print_augs=False) :: Callback

Basic class handling tweaks of the training loop by changing a Learner in various events
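
Since DINO is a fastai Callback, it is attached to a Learner. The sketch below assumes, as with the other callbacks in this library, that the callback applies aug_pipelines and supplies the distillation loss itself, so noop stands in for the criterion; dls is a hypothetical placeholder for any DataLoaders yielding image batches:

from fastai.vision.all import *

# `dls` is a placeholder for your image DataLoaders.
learn = Learner(dls, dino_model,
                loss_func=noop,  # assumption: the callback computes the loss
                cbs=[DINO(aug_pipelines=aug_pipelines)])
learn.fit_one_cycle(100, 1e-3)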

Training schedule for DINO

import numpy as np
import matplotlib.pyplot as plt
from fastai.callback.schedule import combine_scheds, SchedLin, SchedCos

fig, ax = plt.subplots(1, 2, figsize=(15, 5))
# Learning rate: linear warmup over the first 10% of training, then cosine decay.
lr_sched = combine_scheds([0.1, 0.9], [SchedLin(0., 1e-3), SchedCos(1e-3, 1e-6)])
ax[0].plot([lr_sched(i) for i in np.linspace(0, 1, 100)]); ax[0].set_title('lr')
# Weight decay: cosine increase from 0.04 to 0.4 over training.
wd_sched = SchedCos(0.04, 0.4)
ax[1].plot([wd_sched(i) for i in np.linspace(0, 1, 100)]); ax[1].set_title('wd');
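
The same schedulers can be used to visualize the teacher-momentum schedule implied by the callback defaults above (tmom_start=0.996, tmom_end=1.0, tmom_sched=SchedCos):

# Teacher momentum rises from 0.996 toward 1.0 over training.
tmom_sched = SchedCos(0.996, 1.0)
fig, ax = plt.subplots(figsize=(7.5, 5))
ax.plot([tmom_sched(i) for i in np.linspace(0, 1, 100)]); ax.set_title('teacher momentum');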